Detecting object anomalies is crucial in industrial processes, but unsupervised anomaly detection and localization is particularly important because large numbers of defective samples are hard to obtain and the anomaly types encountered in real life are unpredictable. Among existing unsupervised anomaly detection and localization methods, normalizing-flow (NF) based schemes have achieved better results. However, the two subnets (complex functions) $s_i(u_i)$ and $t_i(u_i)$ in NF are usually multilayer perceptrons, which require flattening the input features from 2D to 1D, destroying the spatial location relationships in the feature map and losing spatial structure information. To retain and effectively extract spatial structure information, in this study we design a complex function model with alternating CBAM embedded in stacked $3 \times 3$ full convolutions, which is able to retain and effectively extract spatial structure information within the normalizing flow model. Extensive experimental results on the MVTec AD dataset show that CAINNFlow achieves advanced accuracy and inference efficiency with both CNN and Transformer backbone networks as feature extractors, and CAINNFlow reaches a pixel-level AUC of $98.64\%$ on MVTec AD.
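As a rough illustration of the idea, the sketch below (PyTorch; all module names, layer sizes, and the CBAM simplification are assumptions, not the authors' code) shows an affine coupling layer whose $s(\cdot)$ and $t(\cdot)$ subnets are stacked $3 \times 3$ convolutions with an attention block, so the 2D feature map never has to be flattened to 1D.

```python
# Minimal sketch: an affine coupling layer with convolutional subnets and a
# simplified CBAM block, preserving the spatial layout of the feature map.
import torch
import torch.nn as nn


class SimpleCBAM(nn.Module):
    """Channel attention followed by spatial attention (simplified CBAM)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))


class ConvCouplingLayer(nn.Module):
    """Affine coupling: split channels, predict (s, t) with conv subnets."""

    def __init__(self, channels, hidden=128):
        super().__init__()
        half = channels // 2
        self.subnet = nn.Sequential(
            nn.Conv2d(half, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            SimpleCBAM(hidden),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2 * half, 3, padding=1),  # -> [s, t]
        )

    def forward(self, x):
        u1, u2 = x.chunk(2, dim=1)
        s, t = self.subnet(u1).chunk(2, dim=1)
        s = torch.tanh(s)                  # stabilize the scale term
        v2 = u2 * torch.exp(s) + t         # v2 = u2 * exp(s(u1)) + t(u1)
        log_det = s.sum(dim=(1, 2, 3))     # log|det J| of the coupling
        return torch.cat([u1, v2], dim=1), log_det


features = torch.randn(4, 64, 32, 32)      # backbone feature map
flow_step = ConvCouplingLayer(channels=64)
z, log_det = flow_step(features)
print(z.shape, log_det.shape)              # (4, 64, 32, 32), (4,)
```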
Foodborne illness is a serious but preventable public health problem: delays in detecting the associated outbreaks result in productivity loss, expensive recalls, public safety hazards, and even loss of life. Although social media is a promising source for identifying unreported foodborne illnesses, there is a dearth of labeled datasets for developing effective outbreak detection models. To accelerate the development of machine-learning-based outbreak detection models, we present Tweet-FID (Tweet-Foodborne Illness Detection), the first publicly available annotated dataset for multiple foodborne illness incident detection tasks. Tweet-FID, collected from Twitter, is annotated with three facets: tweet class, entity type, and slot type, with labels produced by experts as well as by crowdsourced workers. We introduce several domain tasks that leverage these three facets: text relevance classification (TRC), entity mention detection (EMD), and slot filling (SF). We describe the end-to-end methodology for dataset design, creation, and labeling that supports model development for these tasks. Comprehensive results for these tasks are provided using state-of-the-art single-task and multi-task deep learning methods on the Tweet-FID dataset. This dataset opens opportunities for future research on foodborne outbreak detection.
Anomaly detection on time series data is increasingly common across various industrial domains that monitor metrics in order to prevent potential accidents and economic losses. However, a scarcity of labeled data and ambiguous definitions of anomalies can complicate these efforts. Recent unsupervised machine learning methods have made remarkable progress in tackling this problem using either single-timestamp predictions or time series reconstructions. While traditionally considered separately, these methods are not mutually exclusive and can offer complementary perspectives on anomaly detection. This paper first highlights the successes and limitations of prediction-based and reconstruction-based methods with visualized time series signals and anomaly scores. We then propose AER (Auto-encoder with Regression), a joint model that combines a vanilla auto-encoder and an LSTM regressor to incorporate the successes and address the limitations of each method. Our model can produce bi-directional predictions while simultaneously reconstructing the original time series by optimizing a joint objective function. Furthermore, we propose several ways of combining the prediction and reconstruction errors through a series of ablation studies. Finally, we compare the performance of the AER architecture against two prediction-based methods and three reconstruction-based methods on 12 well-known univariate time series datasets from NASA, Yahoo, Numenta, and UCR. The results show that AER has the highest averaged F1 score across all datasets (a 23.5% improvement compared to ARIMA) while retaining a runtime similar to its vanilla auto-encoder and regressor components. Our model is available in Orion, an open-source benchmarking tool for time series anomaly detection.
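To make the joint objective concrete, here is a minimal sketch (assumed architecture and weighting, not the Orion implementation) of a model that reconstructs a sliding window with an auto-encoder while an attached regressor predicts the points just before and just after the window, with both errors summed into one loss.

```python
# Minimal AER-style sketch: joint reconstruction + bi-directional prediction.
import torch
import torch.nn as nn


class AERSketch(nn.Module):
    def __init__(self, window=100, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(1, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, window)   # reconstruct the window
        self.reg_fwd = nn.Linear(hidden, 1)        # predict the point after the window
        self.reg_bwd = nn.Linear(hidden, 1)        # predict the point before the window

    def forward(self, x):                          # x: (batch, window, 1)
        _, (h, _) = self.encoder(x)
        h = h.squeeze(0)
        return self.decoder(h), self.reg_fwd(h), self.reg_bwd(h)


def joint_loss(model, x, x_prev, x_next, alpha=0.5):
    """alpha balances reconstruction vs. prediction (an assumed weighting)."""
    recon, pred_next, pred_prev = model(x)
    mse = nn.functional.mse_loss
    rec_err = mse(recon, x.squeeze(-1))
    pred_err = mse(pred_next, x_next) + mse(pred_prev, x_prev)
    return alpha * rec_err + (1 - alpha) * pred_err


model = AERSketch()
x = torch.randn(8, 100, 1)                         # batch of sliding windows
loss = joint_loss(model, x, torch.randn(8, 1), torch.randn(8, 1))
loss.backward()
```

At inference time, the reconstruction and prediction errors per timestamp can then be combined into a single anomaly score, which is the design space the paper's ablation studies explore.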
This paper proposes a one-step two-critic deep reinforcement learning (OSTC-DRL) approach for inverter-based Volt-VAR control (IB-VVC). First, considering that IB-VVC can be formulated as a single-period optimization problem, we formulate IB-VVC as a one-step Markov decision process rather than a standard Markov decision process, which simplifies the DRL learning task. We then design a one-step actor-critic DRL scheme, a simplified version of recent DRL algorithms, which successfully avoids the problem of Q-value overestimation. Furthermore, considering the two objectives of VVC, minimizing power loss and eliminating voltage violations, we utilize two critics to separately approximate the rewards of the two objectives. This simplifies the approximation task of each critic and avoids the interaction effect between the two objectives during critic learning. The OSTC-DRL approach integrates the one-step actor-critic DRL scheme and the two-critic technique. Based on OSTC-DRL, we design two centralized DRL algorithms. In addition, we extend OSTC-DRL to multi-agent OSTC-DRL for decentralized IB-VVC and design two multi-agent DRL algorithms. Simulations show that the proposed OSTC-DRL has a faster convergence rate and better control performance, and that multi-agent OSTC-DRL works well for decentralized IB-VVC problems.
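A hedged sketch of the one-step, two-critic idea follows (illustrative dimensions and networks, not the paper's algorithm): because the MDP has a single step, each critic simply regresses onto its own immediate reward, and the actor maximizes the sum of the two critics' estimates.

```python
# One-step actor-critic update with separate critics for the two objectives.
import torch
import torch.nn as nn

state_dim, action_dim = 33, 6                       # assumed problem sizes

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic_power = nn.Sequential(nn.Linear(state_dim + action_dim, 64),
                             nn.ReLU(), nn.Linear(64, 1))   # power-loss reward
critic_volt = nn.Sequential(nn.Linear(state_dim + action_dim, 64),
                            nn.ReLU(), nn.Linear(64, 1))    # voltage reward

opt_actor = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_critics = torch.optim.Adam(
    list(critic_power.parameters()) + list(critic_volt.parameters()), lr=1e-3)


def update(s, a, r_power, r_volt):
    sa = torch.cat([s, a], dim=-1)
    # Each critic regresses onto its own immediate reward (no bootstrapping,
    # since the decision process terminates after one step).
    critic_err = ((critic_power(sa) - r_power) ** 2).mean() + \
                 ((critic_volt(sa) - r_volt) ** 2).mean()
    opt_critics.zero_grad()
    critic_err.backward()
    opt_critics.step()

    # The actor maximizes the sum of both critics' value estimates.
    sa_pi = torch.cat([s, actor(s)], dim=-1)
    actor_err = -(critic_power(sa_pi) + critic_volt(sa_pi)).mean()
    opt_actor.zero_grad()
    actor_err.backward()
    opt_actor.step()


# One batch of (state, action, reward) samples from the environment or replay.
s, a = torch.randn(32, state_dim), torch.randn(32, action_dim)
update(s, a, torch.randn(32, 1), torch.randn(32, 1))
```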
Document-level relation extraction aims to identify relations between entities across an entire document. Prior efforts to capture long-range dependencies rely heavily on implicitly powerful representations learned through (graph) neural networks, which makes the models less transparent. To address this challenge, in this paper we propose LogiRE, a novel probabilistic model for document-level relation extraction that learns logic rules. LogiRE treats logic rules as latent variables and consists of two modules: a rule generator and a relation extractor. The rule generator produces logic rules that potentially contribute to the final predictions, and the relation extractor outputs the final predictions based on the generated logic rules. The two modules can be optimized efficiently with the expectation-maximization (EM) algorithm. By introducing logic rules into neural networks, LogiRE can explicitly capture long-range dependencies and enjoys better interpretability. Empirical results show that LogiRE significantly outperforms several strong baselines in relation extraction performance (1.8 F1 score) and logical consistency (over 3.3 logic score). Our code is available at https://github.com/rudongyu/logire.
Few-shot learning (FSL), which aims to recognize samples from novel categories given only a few reference samples, is a challenging problem. We find that existing works often build their few-shot models on image-level features obtained by mixing all local-level features, which leads to discriminative location bias and loss of information in local details. To address this problem, this paper returns to the perspective of local-level features and proposes a series of local-level strategies. Specifically, we present (a) a local-agnostic training strategy to avoid discriminative location bias between base and novel categories, (b) a novel local-level similarity measure to capture accurate comparisons between local-level features, and (c) a local-level knowledge transfer that can synthesize different knowledge transfers from the base categories according to different location features. Extensive experiments demonstrate that our proposed local-level strategies significantly boost performance, achieving 2.8%-7.2% improvements over the baselines across different benchmark datasets and also achieving state-of-the-art accuracy.
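For intuition, the snippet below shows one generic way to score similarity at the local-feature level: each query local descriptor is matched to its most similar support descriptor and the matches are averaged. This illustrates local-level comparison in general and is not necessarily the measure proposed in the paper.

```python
# Generic local-level similarity between two backbone feature maps.
import torch
import torch.nn.functional as F


def local_similarity(query_feat, support_feat):
    """query_feat, support_feat: (C, H, W) feature maps from a backbone."""
    q = F.normalize(query_feat.flatten(1), dim=0)     # (C, HW) unit descriptors
    s = F.normalize(support_feat.flatten(1), dim=0)   # (C, HW)
    sim = q.t() @ s                                    # (HW, HW) cosine similarities
    return sim.max(dim=1).values.mean()                # best support match per local


q = torch.randn(64, 5, 5)
s = torch.randn(64, 5, 5)
print(local_similarity(q, s))   # scalar score; higher means more similar
```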
Deep learning has shown impressive performance on hard perceptual problems. However, researchers found deep learning systems to be vulnerable to small, specially crafted perturbations that are imperceptible to humans. Such perturbations cause deep learning systems to mis-classify adversarial examples, with potentially disastrous consequences where safety or security is crucial. Prior defenses against adversarial examples either targeted specific attacks or were shown to be ineffective. We propose MagNet, a framework for defending neural network classifiers against adversarial examples. MagNet neither modifies the protected classifier nor requires knowledge of the process for generating adversarial examples. MagNet includes one or more separate detector networks and a reformer network. The detector networks learn to differentiate between normal and adversarial examples by approximating the manifold of normal examples. Since they assume no specific process for generating adversarial examples, they generalize well. The reformer network moves adversarial examples towards the manifold of normal examples, which is effective for correctly classifying adversarial examples with small perturbation. We discuss the intrinsic difficulties in defending against whitebox attacks and propose a mechanism to defend against graybox attacks. Inspired by the use of randomness in cryptography, we use diversity to strengthen MagNet. We show empirically that MagNet is effective against the most advanced state-of-the-art attacks in blackbox and graybox scenarios without sacrificing the false positive rate on normal examples.
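A minimal sketch of the detector/reformer split described above, built around a toy auto-encoder (the architecture and threshold calibration are assumptions for illustration, not the authors' release):

```python
# Detector: flag inputs with high reconstruction error (far from the manifold
# of normal examples). Reformer: replace inputs with their reconstructions.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(              # toy auto-encoder for flat inputs
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Sigmoid(),
)


def detector(x, threshold):
    """Return True for inputs suspected to be adversarial."""
    err = ((autoencoder(x) - x) ** 2).mean(dim=1)   # per-example error
    return err > threshold


def reformer(x):
    """Project the input towards the manifold of normal examples."""
    return autoencoder(x)


# Calibrate the threshold, e.g. as the 95th percentile of clean-data errors.
clean = torch.rand(256, 784)
errs = ((autoencoder(clean) - clean) ** 2).mean(dim=1)
threshold = errs.quantile(0.95)

x = torch.rand(8, 784)
keep = ~detector(x, threshold)            # drop flagged inputs...
reformed = reformer(x[keep])              # ...and reform the rest before classifying
```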
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
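As a rough illustration of token-relation distillation (finding 1 above), the sketch below matches the student's and teacher's token-to-token similarity maps with a KL loss. The exact relations TinyMIM distills (e.g. attention-derived Q-K and V-V relations) may differ from this feature-similarity stand-in.

```python
# Relation distillation: match token-to-token relation maps, not raw features.
import torch
import torch.nn.functional as F


def relation_map(tokens, tau=1.0):
    """tokens: (B, N, C) -> row-softmaxed token-to-token similarities (B, N, N)."""
    t = F.normalize(tokens, dim=-1)
    return F.softmax(t @ t.transpose(1, 2) / tau, dim=-1)


def relation_distill_loss(student_tokens, teacher_tokens, tau=1.0):
    s_rel = relation_map(student_tokens, tau)
    t_rel = relation_map(teacher_tokens, tau)
    # KL(teacher || student) between the relation maps; note the relation maps
    # have the same (B, N, N) shape even though the channel widths differ.
    return F.kl_div(s_rel.clamp_min(1e-8).log(), t_rel, reduction="batchmean")


student = torch.randn(2, 197, 192)   # ViT-Tiny-sized token sequence (assumed dims)
teacher = torch.randn(2, 197, 768)   # ViT-Base-sized token sequence
loss = relation_distill_loss(student, teacher)
```

In practice the teacher tokens would come from an intermediate teacher layer rather than the last one when the student is much shallower, per finding 2.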
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
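A hedged sketch of the style-aware adaptation idea: a style code predicts per-channel scale and shift values that modulate the hidden activations of a transformer feed-forward layer. This is a generic adaptive-FFN construction for illustration; StyleTalk's actual weight-adaptation scheme may differ in detail.

```python
# Feed-forward layer whose hidden activations are modulated by a style code.
import torch
import torch.nn as nn


class StyleAdaptiveFFN(nn.Module):
    def __init__(self, dim=256, hidden=1024, style_dim=128):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        # Style code -> per-channel (scale, shift) for the hidden activations.
        self.mod = nn.Linear(style_dim, 2 * hidden)

    def forward(self, x, style):            # x: (B, T, dim), style: (B, style_dim)
        scale, shift = self.mod(style).chunk(2, dim=-1)
        h = torch.relu(self.fc1(x))
        h = h * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        return self.fc2(h)


ffn = StyleAdaptiveFFN()
content = torch.randn(2, 50, 256)            # speech-content token sequence
style_code = torch.randn(2, 128)             # output of the style encoder
out = ffn(content, style_code)               # (2, 50, 256) stylized features
```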